
    CERiL: Continuous Event-based Reinforcement Learning

    This paper explores the potential of event cameras to enable continuous-time reinforcement learning. We formalise this problem, in which a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous-time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. Comment: 9 pages, 10 figures.
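
    To make the distinction concrete, the sketch below contrasts quantising an event stream into an image frame with processing events one at a time. This is a minimal illustration only: the (x, y, t, polarity) tuple is the standard event-camera format, but the two processing functions are hypothetical stand-ins, not CERiL's specialised layers.

        import numpy as np

        def events_to_frame(events, height, width):
            """Accumulate events into one quantised frame, discarding the
            precise timestamp of every event (the conventional approach)."""
            frame = np.zeros((height, width), dtype=np.float32)
            for x, y, t, polarity in events:
                frame[y, x] += 1.0 if polarity > 0 else -1.0
            return frame

        def process_event_stream(events, state, decay=0.99):
            """Consume events one at a time, preserving order and timing,
            so an action could be emitted at any point in the stream."""
            for x, y, t, polarity in events:
                state *= decay                    # continuous-time leak (assumed)
                state[y, x] += 1.0 if polarity > 0 else -1.0
            return state

        # Toy stream of (x, y, timestamp, polarity) tuples.
        events = [(3, 2, 0.001, +1), (3, 3, 0.002, -1), (4, 2, 0.004, +1)]
        frame = events_to_frame(events, height=8, width=8)
        state = process_event_stream(events, np.zeros((8, 8), dtype=np.float32))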

    There and Back Again: Self-supervised Multispectral Correspondence Estimation

    Across a wide range of applications, from autonomous vehicles to medical imaging, multi-spectral images provide an opportunity to extract additional information not present in colour images. One of the most important steps in making this information readily available is the accurate estimation of dense correspondences between different spectra. Due to the nature of cross-spectral images, most correspondence-solving techniques for the visual domain are simply not applicable. Furthermore, most cross-spectral techniques utilise spectra-specific characteristics to perform the alignment. In this work, we aim to address the dense correspondence estimation problem in a way that generalises to more than one spectrum. We do this by introducing a novel cycle-consistency metric that allows us to self-supervise. This, combined with our spectra-agnostic loss functions, allows us to train the same network across multiple spectra. We demonstrate our approach on the challenging task of dense RGB-FIR correspondence estimation. We also show the performance of our unmodified network on the cases of RGB-NIR and RGB-RGB, where we achieve higher accuracy than similar self-supervised approaches. Our work shows that cross-spectral correspondence estimation can be solved in a common framework that learns to generalise alignment across spectra.
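
    The cycle-consistency idea can be made concrete with a short sketch: map every pixel from image A to image B with the predicted correspondence field, map it back with the reverse field, and measure how far it lands from its start. This is a minimal illustration assuming dense per-pixel flow fields; the warping helper and error map below are assumptions, not the paper's exact metric or losses.

        import numpy as np

        def warp_coords(coords, flow):
            """Displace pixel coordinates (H, W, 2) by a dense flow field (H, W, 2)."""
            h, w = flow.shape[:2]
            xs = np.clip(coords[..., 0].round().astype(int), 0, w - 1)
            ys = np.clip(coords[..., 1].round().astype(int), 0, h - 1)
            return coords + flow[ys, xs]

        def cycle_consistency_error(flow_ab, flow_ba):
            """Warp every pixel A -> B, then B -> A, and measure how far
            it lands from where it started (zero for a perfect cycle)."""
            h, w = flow_ab.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            start = np.stack([xs, ys], axis=-1).astype(np.float32)
            there = warp_coords(start, flow_ab)
            back = warp_coords(there, flow_ba)
            return np.linalg.norm(back - start, axis=-1)   # (H, W) error map

        # Zero flow in both directions is a trivially perfect cycle.
        flow_ab = np.zeros((4, 4, 2), dtype=np.float32)
        flow_ba = np.zeros((4, 4, 2), dtype=np.float32)
        assert cycle_consistency_error(flow_ab, flow_ba).max() == 0.0

    Pixels with a low cycle error can then be treated as reliable correspondences for self-supervision, while high-error pixels are masked out of the loss.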

    Ta-DAH: Task Driven Automated Hardware Design of Free-Flying Space Robots

    Space robots will play an integral part in exploring the universe and beyond. A correctly designed space robot will facilitate on-orbit assembly (OOA), satellite servicing and active debris removal (ADR). However, problems arise when trying to design such a system, as it is a highly complex multidimensional problem into which there has been little research. Current design techniques are slow and specific to terrestrial manipulators. This paper presents a solution to the slow speed of robotic hardware design and generalises the technique to free-flying space robots. It presents Ta-DAH Design, an automated design approach that utilises a multi-objective cost function in an iterative and automated pipeline. The design approach leverages prior knowledge and facilitates the faster output of optimal designs. The result is a system that can optimise the size of the base spacecraft, manipulator and some key subsystems for any given task. Presented in this work is the methodology behind Ta-DAH Design and a number of optimal space robot designs.
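
    As a rough illustration of a multi-objective cost function inside an iterative design pipeline, the sketch below optimises a two-parameter design vector under a weighted sum of competing objectives. The design variables, objective terms, and weights are hypothetical placeholders, not Ta-DAH's actual cost function.

        import numpy as np
        from scipy.optimize import minimize

        def design_cost(x, weights):
            """Weighted sum of competing objectives for an illustrative
            design vector x = [base_mass_kg, link_length_m]."""
            base_mass, link_length = x
            mass_cost = base_mass                       # launch-mass penalty
            reach_cost = 1.0 / link_length              # shorter arms reach less
            inertia_cost = base_mass * link_length**2   # harder attitude control
            return np.dot(weights, [mass_cost, reach_cost, inertia_cost])

        weights = np.array([1.0, 5.0, 0.1])             # task-driven trade-off (assumed)
        result = minimize(design_cost, x0=[100.0, 1.0], args=(weights,),
                          bounds=[(10.0, 500.0), (0.2, 3.0)])
        print("optimal design (mass kg, link m):", result.x)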

    Methods for Autonomous Calibration, Mapping and Navigation of Vehicles

    The promise of autonomous vehicles is a popular and exciting concept that has seen considerable investment over the last decade. However, most of this investment is in the automotive sector, with far less invested in autonomous maritime vessels. This thesis explores the application of computer vision and robotics to autonomous navigation and docking in the maritime domain.

    Sensor fusion is a core robotics concept allowing the outputs from multiple complementary sensors to be combined for robust localisation and perception. An essential first step is extrinsic sensor calibration, which determines the relative position and orientation of those sensors; however, once the sensors are mounted to a vehicle, calibration is usually difficult and unwieldy to carry out. As the first main contribution, this thesis proposes a method to automatically perform calibration of arbitrary sensor types during normal operation. In addition to enabling a non-specialist to perform extrinsic calibration, it can also provide automatic updates in the case of sensor failure. Furthermore, it allows the relative scale factor to be inferred, facilitating the inclusion of non-metric sensors into the fusion framework without an explicit metric scaling step.

    Autonomous vehicles can only navigate unseen environments if they are able to build a map from which they can perform localisation. This is a common robotics problem traditionally solved with depth sensors and/or 3D reconstruction. However, the operational range of many depth sensors is insufficient for maritime scenarios. The second contribution of this thesis proposes a bird's-eye-view mapping approach using only monocular cameras. By projecting and accumulating semantic information, a probabilistic map is constructed that represents navigable and non-navigable areas. The advantages of this approach over depth sensors are demonstrated in the marine, robotic, and automotive domains.

    Any perception system with a vision-based backbone is vulnerable to adverse lighting conditions, whether that is low light, reflections or fog. These conditions are especially prevalent in the maritime domain. Other spectra, such as infra-red, are less affected but rarely used in the automotive domain. The third contribution solves the stereo correspondence problem between visible, near-infrared, and thermal images. By learning a common representation between unaligned stereo pairs in a self-supervised manner, the proposed approach provides stereo cues between arbitrary spectra. This also allows annotations, which are often easier to obtain for the visible spectrum, to be transferred to other sensors, easing the collection of multispectral datasets.

    The final chapter brings the contributions in automated sensor calibration and semantic ground-plane mapping together with path planning to demonstrate autonomous functionality. The presented experiments show significant performance improvements over what is currently available in a real-world practical application.

    This thesis advances the field of robotics and autonomous vehicles by exploring approaches which cater to both the maritime and automotive domains. It also shows the viability of cost-effective monocular cameras as a sensor.
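
    A minimal sketch of the probabilistic ground-plane accumulation described in the second contribution, assuming each frame's semantic segmentation has already been projected into a bird's-eye-view grid. The log-odds update is the standard occupancy-grid recipe; the thesis' exact projection and fusion details may differ.

        import numpy as np

        def update_navigability(log_odds, bev_semantics, p_navigable=0.7):
            """Fuse one projected semantic frame into a running log-odds map.
            bev_semantics is a boolean grid, True where this frame saw
            navigable area (water, road), False elsewhere."""
            l = np.log(p_navigable / (1.0 - p_navigable))
            return log_odds + np.where(bev_semantics, l, -l)

        grid = np.zeros((200, 200))                     # log-odds 0 = unknown
        frame = np.zeros((200, 200), dtype=bool)
        frame[80:120, 80:120] = True                    # navigable region observed
        for _ in range(10):                             # repeated observations
            grid = update_navigability(grid, frame)
        prob_navigable = 1.0 / (1.0 + np.exp(-grid))    # back to probabilities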

    EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle Avoidance

    The broad scope of obstacle avoidance has led to many kinds of computer-vision-based approaches. Despite its popularity, it is not a solved problem. Traditional computer vision techniques using cameras and depth sensors often focus on static scenes or rely on priors about the obstacles. Recent developments in bio-inspired sensors present event cameras as a compelling choice for dynamic scenes. Although these sensors have many advantages over their frame-based counterparts, such as high dynamic range and temporal resolution, event-based perception has largely remained in 2D. This often leads to solutions reliant on heuristics and specific to a particular task. We show that the fusion of events and depth overcomes the failure cases of each individual modality when performing obstacle avoidance. Our proposed approach unifies event camera and lidar streams to estimate metric Time-To-Impact (TTI) without prior knowledge of the scene geometry or obstacles. In addition, we release an extensive event-based dataset with six visual streams spanning over 700 scanned scenes.
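
    To make the predicted quantity concrete, the sketch below computes per-pixel time-to-impact from two consecutive depth maps using its geometric definition, TTI = depth / closing speed. This finite-difference version only illustrates the target quantity; EVReflex learns dense TTI from fused event and lidar streams rather than from depth differencing.

        import numpy as np

        def time_to_impact(depth_prev, depth_curr, dt, eps=1e-6):
            """Per-pixel TTI = depth / closing speed. Receding or static
            pixels get infinite TTI (no impact expected)."""
            closing_speed = (depth_prev - depth_curr) / dt   # positive = approaching
            return np.where(closing_speed > eps,
                            depth_curr / np.maximum(closing_speed, eps),
                            np.inf)

        d0 = np.full((4, 4), 5.0)                 # obstacle at 5 m
        d1 = np.full((4, 4), 4.9)                 # 0.1 m closer after dt
        print(time_to_impact(d0, d1, dt=0.05))    # ~2.45 s everywhere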

    A Robust Extrinsic Calibration Framework for Vehicles with Unscaled Sensors

    Accurate extrinsic sensor calibration is essential for both autonomous vehicles and robots. Traditionally this is an involved process requiring calibration targets and known fiducial markers, and it is generally performed in a lab. Moreover, even a small change in the sensor layout requires recalibration. With the anticipated arrival of consumer autonomous vehicles, there is demand for a system which can do this automatically, after deployment and without specialist human expertise. To overcome these limitations, we propose a flexible framework which can estimate extrinsic parameters without an explicit calibration stage, even for sensors with unknown scale. Our first contribution builds upon standard hand-eye calibration by jointly recovering scale. Our second contribution is that our system is made robust to imperfect and degenerate sensor data by collecting independent sets of poses and automatically selecting those which are most ideal. We show that our approach's robustness is essential for the target scenario. Unlike previous approaches, ours runs in real time and constantly estimates the extrinsic transform. For both an ideal experimental setup and a real use case, comparison against these approaches shows that we outperform the state-of-the-art. Furthermore, we demonstrate that the recovered scale may be applied to the full trajectory, circumventing the need for scale estimation via sensor fusion.
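
    A minimal sketch of the scaled hand-eye translation step, under the standard AX = XB formulation: given relative motions A_i from a metric sensor and B_i from an unscaled sensor (for example, monocular visual odometry), and an extrinsic rotation R_X solved beforehand, the translation t_X and scale s can be recovered jointly by linear least squares. The function below is an illustrative reduction of that idea with assumed inputs, not the paper's full robust, real-time pipeline.

        import numpy as np

        def solve_translation_and_scale(motions_a, motions_b, R_X):
            """Stack (R_Ai - I) t_X - s * R_X @ t_Bi = -t_Ai over all pose
            pairs and solve for the 4-vector [t_X; s] by least squares."""
            rows, rhs = [], []
            for (R_A, t_A), (_, t_B) in zip(motions_a, motions_b):
                block = np.hstack([R_A - np.eye(3), (-R_X @ t_B).reshape(3, 1)])
                rows.append(block)
                rhs.append(-t_A)
            M, b = np.vstack(rows), np.concatenate(rhs)   # (3N, 4), (3N,)
            x, *_ = np.linalg.lstsq(M, b, rcond=None)
            return x[:3], x[3]                            # t_X and scale s

    Near-degenerate motion, such as pure translation, makes this stacked system rank-deficient, which is one reason the automatic selection of well-conditioned pose sets described above matters in practice.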

    ORCHID: Optimisation of Robotic Control and Hardware In Design using Reinforcement Learning

    The successful performance of any system is dependent on the hardware of the agent, which is typically immutable during RL training. In this work, we present ORCHID (Optimisation of Robotic Control and Hardware In Design), which allows for truly simultaneous optimisation of hardware and control parameters in an RL pipeline. We show that by forming a complex differential path through a trajectory rollout we can leverage a vast amount of information from the system that was previously lost in the 'black-box' environment. Combining this with a novel hardware-conditioned critic network minimises variance during training and ensures stable updates are made. This allows refinements to be made to both the morphology and control parameters simultaneously. The result is an efficient and versatile approach to holistic robot design that brings the final system nearer to true optimality. We show improvements in performance across four different test environments with two different control algorithms; in all experiments the maximum performance achieved with ORCHID is shown to be unattainable using only policy updates with the default design. We also show how re-designing a robot using ORCHID in simulation transfers to a vast improvement in the performance of a real-world robot.
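
    The core idea of pushing gradients through a rollout to both hardware and control parameters can be illustrated with a toy example. The dynamics, cost, and finite-difference gradients below are stand-ins for ORCHID's differentiable pipeline, and the hardware-conditioned critic is omitted entirely; this is a sketch of the optimisation pattern, not the method itself.

        import numpy as np

        def rollout_return(params, steps=50, dt=0.05, target=1.0):
            """Toy 1-D point mass: 'hardware' sets actuator strength and
            'control' a proportional gain; return is negative tracking cost."""
            strength, gain = params
            pos, vel, cost = 0.0, 0.0, 0.0
            for _ in range(steps):
                force = strength * gain * (target - pos) - 1.0 * vel  # P-control + friction
                vel += force * dt
                pos += vel * dt
                cost += (target - pos) ** 2 + 0.01 * strength ** 2    # mass penalty
            return -cost

        def grad_fd(f, x, h=1e-5):
            """Finite-difference gradient standing in for autodiff."""
            g = np.zeros_like(x)
            for i in range(len(x)):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2 * h)
            return g

        params = np.array([1.0, 1.0])        # [hardware, control], updated together
        for _ in range(100):                 # simultaneous gradient ascent on both
            params += 0.005 * grad_fd(rollout_return, params)
        print("jointly optimised hardware and control:", params)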